1. Plast Reconstr Surg Glob Open; 12(2): e5575, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38313589

ABSTRACT

Background: To address patient health literacy, the American Medical Association recommends that the readability of patient education materials not exceed a sixth-grade reading level; the National Institutes of Health recommend no greater than an eighth-grade reading level. However, patient-facing materials in plastic surgery often remain above the recommended average reading level. The purpose of this study was to evaluate ChatGPT 3.5 as a tool for optimizing patient-facing craniofacial education materials.

Methods: Eighteen patient-facing craniofacial education materials were evaluated for readability by a traditional calculator and by ChatGPT 3.5, and the resulting scores were compared. The original excerpts were then entered into ChatGPT 3.5 and simplified by the artificial intelligence tool, and the simplified excerpts were scored by both calculators.

Results: The difference in scores for the original excerpts between the online calculator and ChatGPT 3.5 was not significant (P = 0.441). The simplified excerpts' scores were significantly lower than those of the originals (P < 0.001), and the mean grade level of the simplified excerpts was 7.78, below the recommended maximum of 8.

Conclusions: The use of ChatGPT 3.5 for simplification and readability analysis of patient-facing craniofacial materials is efficient and may help facilitate the conveyance of important health information. ChatGPT 3.5 produced readability scores comparable to traditional readability calculators and provided excerpt-specific feedback. It was also able to simplify materials to the recommended grade levels. With human oversight, we validate this tool for readability analysis and simplification.
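The traditional readability calculators referenced in the abstract typically apply formula-based metrics such as the Flesch-Kincaid Grade Level. As a rough illustration of how such a score is obtained and checked against the AMA and NIH thresholds, the following is a minimal sketch in Python using the textstat package; the package choice and the sample excerpt are assumptions, since the study does not name its calculator or publish its excerpts.

# Minimal sketch: formula-based readability scoring of a patient-facing excerpt.
# Assumes the Python "textstat" package; the sample text below is hypothetical,
# not taken from the study's materials.
import textstat

excerpt = (
    "Craniosynostosis is a condition in which the bones of a baby's skull "
    "grow together earlier than they should, which can change the shape of the head."
)

# Flesch-Kincaid Grade Level: the approximate US school grade needed to read the text.
grade = textstat.flesch_kincaid_grade(excerpt)
print(f"Flesch-Kincaid grade level: {grade:.1f}")

# AMA recommends a sixth-grade level or lower; NIH recommends eighth grade or lower.
if grade > 8:
    print("Above the NIH-recommended maximum; the excerpt may need simplification.")
elif grade > 6:
    print("Within the NIH limit but above the AMA-recommended sixth-grade level.")
else:
    print("Meets both the AMA and NIH readability recommendations.")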
